Thesis seminar given by Pierre Garrigues, a graduate student at the Redwood Center for Theoretical Neuroscience, UC Berkeley. Given on April 10, 2009.
Natural images, as complex and varied as they appear, have a sparse structure: one can learn a basis set such that only a small fraction of the basis functions is needed to describe a given image. The sparse representation can be computed by solving an l1-regularized least-squares problem, commonly referred to as Basis Pursuit Denoising or the Lasso.
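To make the Lasso concrete, here is a minimal sketch that solves the l1-regularized least-squares problem with iterative soft-thresholding (ISTA) — one standard solver for this problem, not necessarily the algorithm discussed in the talk. The dictionary, sparsity level, and regularization weight are illustrative assumptions:

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of the l1 penalty: shrink toward zero, clip at zero.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def lasso(A, y, lam, n_iter=3000):
    # Minimize 0.5*||y - A x||^2 + lam*||x||_1 via ISTA.
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the quadratic term
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + A.T @ (y - A @ x) / L, lam / L)
    return x

# Toy signal generated from 3 atoms of a 2x-overcomplete random dictionary.
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 128))
x_true = np.zeros(128)
x_true[[5, 40, 99]] = [1.5, -2.0, 1.0]
y = A @ x_true
x_hat = lasso(A, y, lam=0.1)
print(np.flatnonzero(np.abs(x_hat) > 0.5))
```

Although the dictionary is overcomplete (128 atoms for a 64-dimensional signal), the l1 penalty drives all but a handful of coefficients to zero, which is the sense in which the code is sparse.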
I will first describe an algorithm for solving the Lasso with observations arriving online, where I introduce an optimization problem that makes it possible to compute a homotopy from the current solution to the solution after a new data point is observed. I will show how it can be applied to compressive sensing of images and to large-scale sparse linear regression.
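The benefit of updating the solution rather than re-solving from scratch can be illustrated with a simple warm-start experiment — note this is not the homotopy algorithm from the talk, only a sketch of the underlying intuition that the old solution is close to the new one after a single observation; the ISTA solver and problem sizes are illustrative assumptions:

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def lasso(A, y, lam, x0=None, n_iter=2000):
    # ISTA with an optional warm start x0.
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1]) if x0 is None else x0.copy()
    for _ in range(n_iter):
        x = soft_threshold(x + A.T @ (y - A @ x) / L, lam / L)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
x_true = np.zeros(100)
x_true[[3, 7, 60]] = [1.0, -1.5, 2.0]
y = A @ x_true
x_old = lasso(A, y, lam=0.1)

# A new observation arrives: append one row, then warm-start from x_old.
a_new = rng.standard_normal(100)
A2, y2 = np.vstack([A, a_new]), np.append(y, a_new @ x_true)
x_cold = lasso(A2, y2, lam=0.1, n_iter=2000)           # solve from scratch
x_warm = lasso(A2, y2, lam=0.1, x0=x_old, n_iter=50)   # a few update steps
print(np.linalg.norm(x_warm - x_cold))
```

A few iterations from the previous solution already land near the new optimum; the homotopy method in the talk exploits the same proximity, but follows the solution path exactly rather than iterating approximately.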
Finally, as the sparse coding model is not sufficient to describe the richness of image statistics, I will introduce a hierarchical sparse coding model where higher-order image structure is modeled and learned from data. The coefficients are distributed as a mixture of Laplace distributions, whose scale parameters control the statistical dependencies among the elements of the code.
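To illustrate the kind of dependency a mixture of Laplace distributions can induce, here is a small sampling sketch. The two-component scale mixture and the shared-scale grouping are illustrative assumptions, not the model from the talk; the point is that a common scale parameter couples coefficient magnitudes even when the coefficients themselves are uncorrelated:

```python
import numpy as np

rng = np.random.default_rng(1)

# A latent context picks one scale per sample (hypothetical two-scale mixture);
# all coefficients in the sample share it, so their magnitudes co-vary.
n_samples, n_coeffs = 20000, 4
scales = np.where(rng.random(n_samples) < 0.5, 0.1, 2.0)
coeffs = rng.laplace(loc=0.0, scale=scales[:, None], size=(n_samples, n_coeffs))

# Signs are independent, so the coefficients are (nearly) uncorrelated...
corr = np.corrcoef(coeffs, rowvar=False)
# ...but magnitudes are positively correlated through the shared scale.
mag_corr = np.corrcoef(np.abs(coeffs), rowvar=False)
print(corr[0, 1], mag_corr[0, 1])
```

This is exactly the statistical signature the hierarchical model targets: higher-order structure shows up in the co-activation of coefficient magnitudes, which the scale parameters are learned to capture.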